Search Results: "sven"

2 November 2015

Sven Hoexter: System administration and education

USENIX recently started a new journal called JESA to tackle the issue of education for system administrators. For the first issue Tom Limoncelli wrote an open letter which tries to summarize the current situation the industry faces. For me it's a kind of problem statement one can use to start thinking about solutions. I don't see anything like a formal education requirement for calling yourself a system administrator or systems engineer anywhere on the horizon, and I don't think it's needed. But still, the expectations I see on both ends - employer and employee - often differ a lot in all kinds of directions. In Germany we have a very organized (some call it bureaucratic) system of non-academic education, organized as an apprenticeship. And as Tom wrote in the open letter mentioned above, many IT departments do not follow best practice, and even more fail to follow it unintentionally because they never got that far. But what kind of people can you expect from this system when they have been trained for three years in a rather sloppy environment? So there is a lot to fix, but as usual I have doubts when I think about possible solutions. Do I expect too much from the education system and/or the people? Am I looking at the wrong people? Is this education system the right system to educate the kind of people I'd like to work with?

Sven Hoexter: Ubuntu 14.04 php-apcu backport arrived

In case you're one of those troubled by the sorry state of the apcu release in Ubuntu 14.04 you can now switch to the official backport. Micah Gersten was kind enough to invest some time and got it uploaded.
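
On a 14.04 system with the backports pocket enabled, switching should boil down to something like this (assuming the package kept the php-apcu name in trusty-backports):
# pull php-apcu explicitly from the trusty-backports pocket
sudo apt-get install -t trusty-backports php-apcu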

27 October 2015

Lunar: Reproducible builds: week 26 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes: Mattia Rizzolo created a bug report to continue the discussion on storing cryptographic checksums of the installed .deb in the dpkg database. This follows the discussion that happened in June and is a prerequisite for adding checksums to .buildinfo files. Niko Tyni identified why the Vala compiler would generate code in varying order. A better patch than his initial attempt still needs to be written.

Packages fixed: The following 15 packages became reproducible due to changes in their build dependencies: alt-ergo, approx, bin-prot, caml2html, coinst, dokujclient, libapreq2, mwparserfromhell, ocsigenserver, python-cryptography, python-watchdog, slurm-llnl, tyxml, unison2.40.102, yojson. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them:

reproducible.debian.net: pbuilder has been updated to version 0.219~bpo8+1 on all eight build nodes. (Mattia Rizzolo, h01ger) Packages that FTBFS but for which no open bugs have been recorded are now tested again after 3 days. Likewise for depwait packages. (h01ger) Out of disk situations will not cause IRC notifications anymore. (h01ger)

Documentation update: Lunar continued to work on writing documentation for the future reproducible-builds.org website.

Package reviews: 44 reviews have been removed, 81 added and 48 updated this week. Chris West and Chris Lamb identified 70 fail-to-build-from-source issues.

Misc.: h01ger presented the project in Mexico City at the 3er Congreso de Seguridad de la Información, where it became clear that we lack academic papers related to reproducible builds. Bryan has been doing hard work to improve reproducibility for OpenWrt. He wrote a report linking to the patches and test results he published.

17 October 2015

Sven Hoexter: TclCurl snapshot uploaded to unstable

While I was pondering whether I should drop the tclcurl package and have it removed from Debian, Christian Werner from Androwish (a Tcl/Tk port for Android) started to fix the open bugs. Thanks a lot, Christian! I've now uploaded a new TclCurl package to unstable based on the code in the new upstream repository plus the patches from Christian. In case you're one of the few TclCurl users out there, please try the new package. I'm still pondering whether it's worth keeping the package at all. For the last five years or so I could get along with the Tcl http module just fine and thus no longer use TclCurl myself. If someone would like to adopt it, just write me a mail; I'd be happy to give it away.

28 September 2015

Sven Hoexter: HP tooling switches from hpacucli to hpssacli

I guess I'm a bit late to the game, but I just noticed that HP no longer provides the venerable hpacucli tool for Debian/jessie and Ubuntu 14.04. While you can still install it (as I did from an internal repository), it no longer works on Gen9 blades. The replacement seems to be hpssacli, and it's available as usual from the HP repository. I should've read the manual.
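
The good news is that the command syntax seems to carry over more or less unchanged, e.g. for a quick status overview (a sketch from memory, double-check against the hpssacli documentation):
# new tool, same grammar as the old hpacucli invocations
hpssacli ctrl all show config
hpssacli ctrl all show status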

27 September 2015

Sven Hoexter: 1blu hack and the usual TLS certificate key madness

Some weeks ago the German low-cost hoster 1blu got hacked, and there was a bit of fuss later about the TLS certificates issued by 1blu. I think they reissued all of them. Since I knew that some hosters offer to generate the complete certificate + key bundle for the customer, I naively assumed that only lazy and novice customers were victims of that issue. Today, while helping someone, I learned that 1blu forces you to use the key generated by them for certificates included in a virtual server bundle, and probably other bundles as well. That makes those bundles a lot less attractive, since the included certificate is not useful at all. One could of course argue that a virtual server is not trustworthy anyway, but I'd like to believe for now that it's more complicated to extract stuff from all running virtual servers than to dump the central database / key repository. Maybe it's time to create a wrapper around openssl that is less opaque to novice users, so we can get rid of key generation by a third party one day. In the end it's a disastrous trend that only got started because of usability issues.
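
Until such a wrapper exists, the manual way is not that bad either. A minimal sketch, with placeholder hostname and file names, of generating key and CSR locally so the private key never leaves your own machine:
# generate an unencrypted 2048 bit key and a CSR in one go
openssl req -new -newkey rsa:2048 -nodes \
    -keyout www.example.org.key -out www.example.org.csr \
    -subj '/CN=www.example.org'
chmod 600 www.example.org.key
# only the CSR (www.example.org.csr) gets handed to the CA / hoster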

25 September 2015

Sven Hoexter: Ubuntu 14.04 php-apcu 4.0.7 backport

Looks like the php-apcu release shipped with Ubuntu 14.04 is really buggy. Since nobody at Ubuntu seems to care about packages in universe, I've added a backport of php-apcu 4.0.7 to my PPA. It's just a rebuild, so no magic involved. Update: I've now used the requestbackport tool to request a backport the Ubuntu way.
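
For the record, the request itself is done with the requestbackport tool from ubuntu-dev-tools (a sketch; check its manpage for the options to pick source and destination release):
# file the backport request bug on Launchpad the Ubuntu way
sudo apt-get install ubuntu-dev-tools
requestbackport php-apcu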

Sven Hoexter: getting rid of xchat

I'm lazy, so I stuck with xchat for way too long. It seems to be dead since 2010, but luckily some good souls maintain a fork called hexchat. That's what I moved to a few weeks ago. Now, looking at the Debian xchat package, I feel the urgent need to file a request for removal. Since I'm not a member of QA I asked for some advice, but the feedback is a bit sparse so far. Maybe everyone still using xchat could just switch to hexchat, so we can remove xchat next year and nobody will notice? The only obvious drawback I can see at the moment is the missing Tcl plugin. The rest of the migration is more or less reconfiguring everything to your preferences.

Sven Hoexter: whiteboards

If you visit your potentially new team in the office and there is no whiteboard, or only a barely used one, you might be better off looking for a different team.

14 September 2015

Sven Hoexter: mysql password hash

Note to myself how the mysql PASSWORD() function works as a one-liner:
echo -n foobar | openssl sha1 -binary | openssl sha1 -hex | sed 's/(stdin)= /*/' | tr '[:lower:]' '[:upper:]'
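
A quick cross-check against a running MySQL server (pre-8.0, where PASSWORD() still exists) should print the same *-prefixed hash:
# compare with the output of the shell one-liner above
mysql -N -e "SELECT PASSWORD('foobar');"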

4 September 2015

Sven Hoexter: wildcard SubjectAlternativeNames

After my experiment with a few more SANs than usual, I received the advice that multiple wildcards should in theory work as well. In practice I could not get something like 'DNS:*.*.sven.stormbind.net' accepted by any decent browser. The other obstacle would be finding a CA to sign it for you. As far as I can tell, everyone sticks to the wildcard definition given in the CAB BR:
Wildcard Certificate: A Certificate containing an asterisk (*) in the left-most position of any of the
Subject Fully-Qualified Domain Names contained in the Certificate.
That said, I could reduce the number of SANs from 1960 to 90 by adding a wildcard SAN per person. That's also a number small enough for Internet Explorer not to fail during the handshake. I initially rejected that option because I thought that multiple wildcards on one certificate are not accepted by browsers. In practice it just seems to be a rarely offered option on the market.
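
For illustration, this is roughly what the per-person wildcard SAN list looks like in an openssl config fragment; the developer names are made up and san.cnf is a hypothetical config whose [ san ] section gets referenced via subjectAltName = @san:
# one wildcard SAN per developer instead of one SAN per project,
# which is what shrank the list from 1960 to roughly 90 entries
cat >> san.cnf <<'EOF'
[ san ]
DNS.1 = *.alice.devel.ourdomain.example
DNS.2 = *.bob.devel.ourdomain.example
DNS.3 = *.carol.devel.ourdomain.example
EOF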

30 August 2015

Sven Hoexter: 1960 SubjectAlternativeNames on one certificate

tl;dr: You can add 1960+ SubjectAlternativeNames to one certificate and at least Firefox and Chrome work fine with that. Internet Explorer failed, but I did not investigate why.

So why would you want close to 2K SANs on one certificate? While we're working on adopting a more dynamic development workflow at my workplace, we're currently bound to a central development system. From there we serve a classic virtual hosting setup with "projectname.username.devel.ourdomain.example" mapped onto "/web/username/projectname/". That is 100% dynamic with wildcard DNS entries: you can just add a new project to your folder and use it directly. All of that is served from a single VirtualHost. Now our developers have started to go through all our active projects to make them fit for serving via HTTPS. While we can verify the proper usage of HTTPS on our staging system, where we have valid certificates, that's not the way you'd like to work. So someone approached me to look into a solution for our development system. Obvious choices like wildcard certificates do not work here because we have two dynamic components in the FQDN. We would have to buy a wildcard certificate for every developer and create a VirtualHost entry for every new developer. That's expensive and we don't want all that additional work.

So I started to search for documented limits on the number of SANs you can have on a certificate. The good news: there are none. The RFC does not define a limit. So much for the theory. ;) Following Ivan's excellent documentation I set up an internal CA, and one ugly "find ... sed ... tr ..." one-liner later I had a properly formatted openssl config file to generate a CSR with all 1960 "projectname.username..." SAN combinations found on the development system. Two openssl invocations (CSR generation and signing) later I had a signed certificate with 1960 SANs on it. I imported the internal CA I created into Firefox and Chrome, and to my surprise it worked. Noteworthy: to sign with "openssl ca" without interactive prompts you have to use the "-batch" option.

I'm thinking about regenerating the certificate every morning so our developers just have to create a new project directory, and within 24h serving via HTTPS would be enabled. The only thing I'm currently pondering is how to properly run the CA in a corporate Windows world. We could of course ask the Windows guys to include it for everyone, but then we would really have to invest time in properly running the CA. I'd like to avoid that hassle, so I guess we'll just stick to providing the CA to those developers who need it. This all-or-nothing model is a constant PITA, and you really do not want to get owned via your own badly managed CA. :(

Regarding Internet Explorer, it jumped in my face with a strange error message that recommended enabling TLS 1.0, 1.1 and 1.2 in the options menu. Of course that's already enabled. I'll try to take a look at the handshake next week, but I bet we have to accept for the moment that IE will not work with that many SANs. It would be interesting to try out Windows 10 with Spartan, but I'm not interested enough in Windows to invest more time on that front. Other TLS implementations, like Java, would also be interesting to test.
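
For reference, a minimal sketch of that approach (not the exact one-liner from above): file names like san.cnf, ca.cnf and dev.* are hypothetical, san.cnf is assumed to start from a normal req config template with req_extensions pointing at subjectAltName = @san, and ca.cnf is assumed to set copy_extensions = copy so the SANs survive signing:
# build the [ san ] section from the /web/<username>/<projectname>/ layout
{
  echo "[ san ]"
  i=0
  for dir in /web/*/*/; do
    i=$((i + 1))
    user=$(basename "$(dirname "$dir")")
    project=$(basename "$dir")
    echo "DNS.$i = $project.$user.devel.ourdomain.example"
  done
} >> san.cnf

# CSR generation and signing with the internal CA; -batch skips the prompts
openssl req -new -newkey rsa:2048 -nodes -config san.cnf -keyout dev.key -out dev.csr
openssl ca -batch -config ca.cnf -in dev.csr -out dev.crt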

Ben Hutchings: Securing www.decadent.org.uk

Sven Hoexter replied to my previous entry to say that WoSign also provides free DV TLS certificates. What's more, they allow up to 10 alternate names, unlike StartSSL. So I've gone ahead with a new certificate for www.decadent.org.uk and other virtual servers including git.decadent.org.uk. WoSign sensibly mandates a key length of 2048 bits, and together with the default TLS configuration for Apache in Debian 'jessie' this resulted in an A- rating from the Qualys SSL Server Test. I then disabled non-PFS and otherwise weak cipher suites in /etc/apache2/mods-enabled/ssl.conf:
SSLCipherSuite HIGH:!aNULL:!kRSA:!3DES
This resulted in an A rating. Finally, I added redirection of all plaintext HTTP connections to HTTPS (which is easier than working out how to make the virtual server work both with and without TLS, anyway). I enabled HSTS for each VirtualHost:
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
This resulted in an A+ rating. These web sites will now be inaccessible to Java 6 and IE on Windows XP, but that's no great loss (1 in 1500 hits over the past few weeks).
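
A quick way to confirm the header actually gets sent:
# should print the Strict-Transport-Security header configured above
curl -sI https://www.decadent.org.uk/ | grep -i strict-transport-security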

4 August 2015

Sven Hoexter: TLS scanning and IPv6

I just noticed that SSLLabs now supports IPv6. I could not find an announcement for it, but I'd guess it has been there for some time already. There is also a new sslscan release in experimental with IPv6 support. Thanks to Marvin and formorer, who finally made that happen. Update: Since this won't hit the official backports.d.o soon, I've done a pbuilder build for jessie.
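
Scanning a host over IPv6 should then look something like this (the flag name is my assumption about the new sslscan release, check sslscan --help):
# force the scan over IPv6; the hostname is a placeholder
sslscan --ipv6 www.example.org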

3 August 2015

Sven Hoexter: Failing with F5: CMP - Clustered Multiprocessing

The last Whiteboard Wednesday tackled a few of the gotchas of the F5 CMP implementation. I take that as an opportunity to finish a blog post I started to write six months ago on this topic. I've run into two scenarios where CMP got in my way. All of this happened on TMOS 11.4.1+HF running on BigIP 2K systems. Since F5 sometimes moves very quickly, you might not experience those issues on other hardware or with a different software release.

applying low and strict connection limits

Apparently connection limits are counted on a per-tmm level and are not shared between the tmm processes running on different CPUs. So if you have two CPU cores with HT you'll see four tmm processes that will all enforce the connection limit. If you have a max connection limit set to two, you might still see a maximum of eight connections. That is a very bad thing if you abuse the F5 to limit your outbound connections because your counterpart is not able to enforce connection limits, but forces you via a contract to not open more than two connections. Disable CMP and everything ends up on the same tmm process, and suddenly your connection limits work as expected. If your connection limit is big enough you can cheat: say it's twelve connections, then you can configure a limit of three. With four processes that makes a maximum of twelve connections, as required. Though due to the hashing and distribution among the tmm processes it can happen that you reject connections on one process while others are idle. Another option would be to enter the dirty land of iRules and tables to maintain your own connection count table. There are examples in the F5 DevCentral community, but the best resolution you can get, without scanning the whole table on every request, is X requests per minute.

active health checks

Active monitoring checks were added to detect the operational state of our application servers. Now imagine you have some realtime processing with over a hundred requests per minute. In case you have issues with one instance you'd like to remove it from the pool sooner rather than later, which requires frequent checking. So here is a simple active http check:
defaults-from http-appmon
destination *:*
interval 9
recv "^HTTP/1\\.1 200 OK"
send "GET /mon/state HTTP/1.1\\r\\nHost: MON\\r\\nConnection: Close\\r\\n\\r\\n"
time-until-up 0
timeout 2
up-interval 1
(Configuration options are described here.) With an up-interval of 1 second and a timeout of 2 seconds, someone naive like me expected that a fatal issue like a crashed application would be detected within three seconds and that no more requests would hit the application after second 3. That turned out to take slightly longer, at least 4s in this case, often more. Also, for a regular maintenance we had to instruct our scripts to flip the application monitor to unavailable and wait for interval x tmm process count + timeout + 1s + max allowed request duration. That is not an instant off.

If you use tcpdump (on the F5, where it's patched to show you the tmm that received the request) to look at the health check traffic, you'll notice that the tmm threads pass around the handle to do a health check. So my guess is that the threads only sync the health check state of the application when they're due to run the health check. That makes sense performance-wise, but if you operate with longer timeouts and bigger intervals you will definitely lose a notable amount of requests. Even in our low-end case we already lose a notable amount of requests in 4s. While you could resend them, you never know for sure whether a request was already processed or not, so you have to ensure that somewhere else in your stack. One option is to disable CMP, but I guess that's not an option for most people who need to scale out to handle the load and don't run the balancer only for the sake of redundancy. If you run the balancer only for the sake of redundancy the choice is easy.

You find a hint of this kind of issue if you look up the documentation for the inband monitor: "Note: Systems with multiple tmm processes use a per-process number to calculate failures, depending on the specified load balancing method." That case is a bit different due to the nature of the monitor, but it points in the same direction. The actual check is executed by bigd according to this document, so I'm pretty sure I'm still missing some pieces of the picture. Just be warned that you really have to test your outage scenarios, and you should double-check if you run a transaction-based service that is sensitive to lost requests.
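
To make that waiting rule of thumb concrete, here is a small sketch using the interval and timeout from the monitor above, four tmm processes, and an assumed maximum request duration of 5s (a placeholder value):
# worst case before a disabled member stops receiving traffic:
# interval x tmm count + timeout + 1s + max allowed request duration
INTERVAL=9; TMM_COUNT=4; TIMEOUT=2; MAX_REQUEST=5
echo $(( INTERVAL * TMM_COUNT + TIMEOUT + 1 + MAX_REQUEST ))   # prints 44 (seconds)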

Sven Hoexter: mod_realdoc packages

I'm currently looking at mod_realdoc from Rasmus Lerdorf / Etsy to find out if it can scratch one of my itches at work. Since the packaging in the Etsy GitHub repository is slightly incomplete and there are no real tarball releases, I've rolled one of my own. For Debian/jessie I've dumped a test build here, and for Ubuntu 14.04 I've wrestled with launchpad.net and created a PPA.
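
If you just want to try the module without any packaging, a quick local build with apxs should also work (a sketch; assumes the Apache development headers, apache2-dev on Debian/jessie, are installed):
# fetch the module source and build it against the installed Apache
git clone https://github.com/etsy/mod_realdoc.git
cd mod_realdoc
apxs -c mod_realdoc.c            # compile into mod_realdoc.la
sudo apxs -i -a mod_realdoc.la   # install and add the LoadModule line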

24 July 2015

Simon Kainz: DUCK challenge: week 3

One more update on the DUCK challenge: in the current week, the following packages were fixed and uploaded into unstable: So we had 10 packages fixed and uploaded by 8 different uploaders. A big "Thank You" to you!! Since the start of this challenge, a total of 35 packages, uploaded by 25 different people, have been fixed. Here is a quick overview:
            Week 1  Week 2  Week 3  Week 4  Week 5  Week 6  Week 7
# Packages      10      15      10       -       -       -       -
Total           10      25      35       -       -       -       -
The list of the fixed and updated packages is available here. I will try to update this ~daily. If I missed one of your uploads, please drop me a line. There is still lots of time until the end of DebConf15 and the end of the DUCK Challenge, so please get involved. Previous articles are here: Week 1, Week 2.

22 July 2015

Sven Hoexter: moto g falcon CM 12.1 nightly - eating the battery alive

At least the nightly builds from 2015-07-21 to 2015-07-24 eat the battery alive. Until that one is fixed one can downgrade to cm-12.1-20150720-NIGHTLY-falcon.zip. The downgrade fixed the issue for me. Update: I'm now running fine with the build from 2015-07-26.
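
If you need the downgrade, a sketch of flashing the older build, assuming a custom recovery with adb sideload support (take a backup first):
# reboot into recovery, choose "apply update from ADB", then:
adb sideload cm-12.1-20150720-NIGHTLY-falcon.zip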

21 July 2015

Sven Hoexter: O: courierpassd

In case you're one of the few still depending on courierpassd and would like to see it be part of stretch, please pick it up. I'm inclined to file a request for removal before we release stretch if nobody does.

14 July 2015

Sven Hoexter: some more thoughts on DHE usage

I looked a bit more into the possible arguments for keeping DHE alive once you support ECDHE and only expect traffic from end users with a real browser (machine-to-machine traffic from obscure libraries is a different matter). From the SSL Labs checks we can deduce the following components that support DHE but do not support ECDHE: Windows XP does not support forward secrecy at all, and all other browsers besides IE (except for the first Chrome releases) bring their own TLS implementation. A very interesting data point is the cipher usage graphing provided by Wikimedia, which you can find on this dashboard. Currently I see roughly 4% of connections negotiated with some variant of a DHE cipher. That is not much after all, so I'm tempted to disable DHE for HTTPS where I have working ECDHE support soon. I'd first like to gather some stats from the BigIPs which I'm babysitting, but that unfortunately is not that easy. As far as I can tell the only option is to insert a header via an iRule and log the header on a backend system. Or you can set up HSL and log directly from the iRule.
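
A quick outside check for whether a server still negotiates a DHE suite at all looks roughly like this (the hostname is a placeholder, and on older OpenSSL releases the cipher string alias may be EDH instead of DHE):
# offer only DHE suites; if the handshake succeeds the server still enables DHE,
# otherwise the Cipher line typically shows (NONE)
echo | openssl s_client -connect www.example.org:443 -cipher 'DHE' 2>/dev/null | grep -i 'cipher'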
